Search results for "Natural language understanding"
Showing 6 of 6 documents
The Automatic Extraction of Cognate Semantic Roles. The Importance of Being Cognate
2022
This study will introduce a tool – termed NLPYTALY – for the automatic treatment of naturally-occurring texts in Italian. The tool distinguishes constructs with an ordinary verb from those with a support verb. Meaning is rendered by using cognate (i.e. etymologically related) semantic roles (CSR), which differ from other roles because they are expressed with the content morpheme of the predicate licensing arguments. CSRs offer a number of advantages: they can be semi-automatically derived and use the who-does-what model. Besides, they facilitate the detection of anaphoric chains and produce a foreground/background opposition of the named entities. Finally, they permit the construction of a chro…
GAIML: A New Language for Verbal and Graphical Interaction in Chatbots
2008
Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. The interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules. However, the interaction between the system and the user is only performed through textual areas for inputs and replies. An interaction able to add to natural language also graph…
Extending the Tsetlin Machine With Integer-Weighted Clauses for Increased Interpretability
2020
Despite significant effort, building models that are both interpretable and accurate is an unresolved challenge for many pattern recognition problems. In general, rule-based and linear models lack accuracy, while deep learning interpretability is based on rough approximations of the underlying inference. Using a linear combination of conjunctive clauses in propositional logic, Tsetlin Machines (TMs) have shown competitive performance on diverse benchmarks. However, to do so, many clauses are needed, which impacts interpretability. Here, we address the accuracy-interpretability challenge in machine learning by equipping the TM clauses with integer weights. The resulting Integer Weighted TM (…
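The classification scheme the abstract describes – a linear combination of conjunctive clauses, each now carrying a learned integer weight – can be sketched as follows. The clause sets, feature names, and threshold here are illustrative assumptions, not the paper's actual learned model:

```python
# Sketch of integer-weighted clause voting, assuming each clause is a
# conjunction of propositional literals and each weight is a learned integer.
# Clause contents and feature names below are illustrative only.

def clause_matches(clause, features):
    """A conjunctive clause fires only if every literal it contains holds.
    Literals are (name, expected_value) pairs over boolean features."""
    return all(features.get(name, False) == expected for name, expected in clause)

def classify(weighted_clauses, features):
    """Sum the integer weights of all firing clauses; positive weights vote
    for the class, negative weights vote against. The sign decides."""
    score = sum(w for clause, w in weighted_clauses
                if clause_matches(clause, features))
    return 1 if score >= 0 else 0

# Three weighted clauses stand in for the many unweighted ones a plain TM needs:
weighted_clauses = [
    ([("x1", True), ("x2", True)], 4),    # strong vote for class 1
    ([("x3", True)], -2),                 # moderate vote against
    ([("x1", True), ("x3", False)], 1),   # weak vote for class 1
]

# Clauses 1 and 3 fire: 4 + 1 = 5 >= 0, so class 1.
print(classify(weighted_clauses, {"x1": True, "x2": True, "x3": False}))
```

The interpretability gain is that one weighted clause summarizes what would otherwise be several duplicated clauses voting in the same direction.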
Using the Tsetlin Machine to Learn Human-Interpretable Rules for High-Accuracy Text Categorization With Medical Applications
2019
Medical applications challenge today's text categorization techniques by demanding both high accuracy and ease-of-interpretation. Although deep learning has provided a leap ahead in accuracy, this leap comes at the sacrifice of interpretability. To address this accuracy-interpretability challenge, we here introduce, for the first time, a text categorization approach that leverages the recently introduced Tsetlin Machine. In all brevity, we represent the terms of a text as propositional variables. From these, we capture categories using simple propositional formulae, such as: if "rash" and "reaction" and "penicillin" then Allergy. The Tsetlin Machine learns these formulae from a labelled tex…
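The abstract's example rule can be made concrete: terms of a text become propositional variables, and a category is a simple conjunctive formula over them. A minimal sketch, with the rule hand-written rather than learned, and the vocabulary and sample note invented for illustration:

```python
# Sketch of the abstract's example: if "rash" and "reaction" and "penicillin"
# then Allergy. The vocabulary and the sample clinical note are illustrative.

def term_variables(text, vocabulary):
    """Map each vocabulary term to True iff it occurs in the text."""
    tokens = set(text.lower().split())
    return {term: term in tokens for term in vocabulary}

def allergy_rule(v):
    # The conjunctive formula from the abstract, written out by hand here;
    # the Tsetlin Machine would learn such formulae from labelled examples.
    return v["rash"] and v["reaction"] and v["penicillin"]

vocab = ["rash", "reaction", "penicillin"]
note = "patient developed a rash after penicillin likely an allergic reaction"
print(allergy_rule(term_variables(note, vocab)))  # → True
```

Because the category is a readable propositional formula rather than a weight matrix, a clinician can inspect exactly which terms triggered the classification.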
RDF* Graph Database as Interlingua for the TextWorld Challenge
2019
This paper briefly describes the top-scoring submission to the First TextWorld Problems: A Reinforcement and Language Learning Challenge. To alleviate the partial observability problem, characteristic to the TextWorld games, we split the Agent into two independent components: Observer and Actor, communicating only via the Interlingua of the RDF* graph database. The RDF* graph database serves as the “world model” memory incrementally updated by the Observer via FrameNet informed Natural Language Understanding techniques and is used by the Actor for the efficient exploration and planning of the game Action sequences. We find that the deep-learning approach works best for the Observer componen…
Improving Assessment of Students through Semantic Space Construction
2009
Assessment is one of the hardest tasks an Intelligent Tutoring System has to perform. It involves different and sometimes uncorrelated sub-tasks: building a student model to define her needs, defining tools and procedures to perform tests, understanding students' replies to system prompts, defining suitable procedures to evaluate the correctness of students' replies, and strategies to improve students' abilities after the assessment session. In this work we present an improvement of our system, TutorJ, with particular attention to the assessment phase. Many tutoring systems offer only a limited set of assessment options like multiple-choice questions, fill-in-the-blanks tests or other types …